ℓ1 Regularization in Infinite Dimensional Feature Spaces
Authors
Abstract
In this paper we discuss the problem of fitting ℓ1-regularized prediction models in infinite (possibly uncountable) dimensional feature spaces. Our main contributions are: a. deriving a generalization of ℓ1 regularization based on measures, which can be applied in uncountable feature spaces; b. proving that the sparsity property of ℓ1 regularization is maintained in infinite dimensions; c. devising a path-following algorithm that can generate the set of regularized solutions in “nice” feature spaces; and d. presenting an example of penalized spline models where this path-following algorithm is computationally feasible and gives encouraging empirical results.
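To make contribution (a) concrete, here is a minimal sketch of how ℓ1 regularization can be phrased in terms of measures; the notation (feature index set Ω, features φ_ω, signed measure μ) is illustrative and not necessarily the paper's own:

\[ \hat{\mu} \in \arg\min_{\mu} \; \sum_{i=1}^{n} L\Big(y_i, \int_{\Omega} \phi_{\omega}(x_i)\, d\mu(\omega)\Big) + \lambda \, \|\mu\|_{TV} \]

The coefficient vector of ordinary ℓ1 regularization is replaced by a signed measure μ over Ω, and \|\mu\|_{TV} is its total-variation norm; when Ω is finite, \|\mu\|_{TV} = \sum_j |\beta_j| recovers the usual ℓ1 penalty.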
Similar Articles
Dimensionality Reduction and Learning: Ridge Regression vs. PCA
The theme of these two lectures is that for L2 methods we need not work in infinite-dimensional spaces. In particular, we can unadaptively find and work in a low-dimensional space and achieve about as good results. These results question the need for explicitly working in infinite (or high) dimensional spaces for L2 methods. In contrast, for sparsity-based methods (including L1 regulari...
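One standard way to see why L2 methods never truly need the ambient dimension (our gloss on the snippet, not a claim from the lectures themselves): the ridge regression solution admits the closed form

\[ \hat{w} = \arg\min_{w} \|y - Xw\|_2^2 + \lambda \|w\|_2^2 = X^{\top} (X X^{\top} + \lambda I)^{-1} y, \]

so the solution lies in the span of the n training points, and at most n directions of the (possibly infinite-dimensional) feature space ever matter.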
Consistency of the Group Lasso and Multiple Kernel Learning
We consider the least-squares regression problem with regularization by a block ℓ1-norm, i.e., a sum of Euclidean norms over spaces of dimensions larger than one. This problem, referred to as the group Lasso, extends the usual regularization by the ℓ1-norm, where all spaces have dimension one and where it is commonly referred to as the Lasso. In this paper, we study the asymptotic model consistency of...
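In symbols, with the weight vector w partitioned into groups w_1, …, w_G (illustrative notation, not necessarily the paper's), the group Lasso objective reads

\[ \hat{w} = \arg\min_{w} \; \tfrac{1}{2}\|y - Xw\|_2^2 + \lambda \sum_{g=1}^{G} \|w_g\|_2, \]

and when every group has dimension one the penalty becomes \sum_j |w_j|, recovering the Lasso.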
Stochastic Particle Gradient Descent for Infinite Ensembles
The superior performance of ensemble methods with infinite models is well known. Most of these methods are based on optimization problems in infinite-dimensional spaces with some regularization; for instance, boosting methods and convex neural networks use L1-regularization with a non-negativity constraint. However, due to the difficulty of handling L1-regularization, these problems require ea...
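A hedged sketch of the kind of objective this snippet alludes to, in the same measure-based notation as above (assumed, not quoted from the paper): the infinite ensemble is a non-negative mixture of base models h(·; θ), and the L1 penalty becomes the total mass of the mixing measure:

\[ \min_{\mu \ge 0} \; \sum_{i=1}^{n} L\Big(y_i, \int_{\Theta} h(x_i; \theta)\, d\mu(\theta)\Big) + \lambda\, \mu(\Theta). \]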
Low-Rank Regularization for Sparse Conjunctive Feature Spaces: An Application to Named Entity Classification
Entity classification, like many other important problems in NLP, involves learning classifiers over sparse, high-dimensional feature spaces that result from the conjunction of elementary features of the entity mention and its context. In this paper we develop a low-rank regularization framework for training max-entropy models in such sparse conjunctive feature spaces. Our approach handles conjunc...
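One common way such low-rank regularizers are written (our illustrative formalization, not necessarily this paper's exact objective): with mention features u_i and context features v_i, conjunctive features correspond to the outer product u_i v_i^⊤, so the model weights form a matrix W whose rank is controlled through its nuclear norm:

\[ \hat{W} = \arg\min_{W} \; \sum_{i=1}^{n} L\big(y_i, \langle W, u_i v_i^{\top} \rangle\big) + \lambda \|W\|_{*}, \qquad \|W\|_{*} = \sum_k \sigma_k(W). \]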
Journal title:
Volume / Issue:
Pages: -
Publication date: 2007